Online Submodular Maximization



A Single Recipe for Online Submodular Maximization with Adversarial or Stochastic Constraints

Neural Information Processing Systems

In this paper, we consider an online optimization problem in which the reward functions are DR-submodular and, in addition to maximizing the total reward, the sequence of decisions must satisfy some convex constraints on average. Specifically, at each round $t\in\{1,\dots,T\}$, upon committing to an action $x_t$, a DR-submodular utility function $f_t(\cdot)$ and a convex constraint function $g_t(\cdot)$ are revealed, and the goal is to maximize the overall utility while ensuring that the average of the constraint functions, $\frac{1}{T}\sum_{t=1}^T g_t(x_t)$, is non-positive. Such cumulative constraints arise naturally in applications where the average resource consumption is required to remain below a prespecified threshold. We study this problem under an adversarial model and a stochastic model for the convex constraints, where the functions $g_t$ can vary arbitrarily or according to an i.i.d. process. We propose a single algorithm which achieves sub-linear (with respect to $T$) regret as well as sub-linear constraint violation bounds in both settings, without prior knowledge of the regime.
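The average-constraint setting can be illustrated with a toy primal-dual online loop. This is a hedged sketch, not the paper's algorithm: linear monotone rewards stand in for general DR-submodular utilities, the constraint is a simple per-round budget, and the step size `eta` is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, eta = 5, 2000, 0.05
b = 1.25                 # illustrative budget: average cost must stay below b
x = np.zeros(n)          # decision in [0, 1]^n
lam = 0.0                # dual variable tracking average constraint violation
cum_f, cum_g = 0.0, 0.0

for t in range(T):
    w = rng.uniform(0.5, 1.5, n)   # reward gradient; <w, x> is monotone DR-submodular (linear)
    c = rng.uniform(0.0, 1.0, n)   # per-round cost vector
    # commit to x, then observe f_t and g_t
    cum_f += w @ x
    g_val = c @ x - b              # g_t(x) = <c, x> - b; want the average non-positive
    cum_g += g_val
    # primal ascent on the Lagrangian, dual ascent on the constraint value
    x = np.clip(x + eta * (w - lam * c), 0.0, 1.0)
    lam = max(0.0, lam + eta * g_val)

print(cum_f / T, cum_g / T)  # average reward and average constraint value
```

Because the dual variable stays bounded, the cumulative constraint value $\sum_t g_t(x_t)$ grows sub-linearly, which is the behavior the abstract's guarantee formalizes.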


Improved Algorithms for Online Submodular Maximization via First-order Regret Bounds

Neural Information Processing Systems

We consider the problem of nonnegative submodular maximization in the online setting. At time step $t$, an algorithm selects a set $S_t \in \mathcal{C} \subseteq 2^{V}$, where $\mathcal{C}$ is a feasible family of sets. An adversary then reveals a submodular function $f_t$. The goal is to design an efficient algorithm for minimizing the expected approximate regret. In this work, we give a general approach for improving regret bounds in online submodular maximization by exploiting "first-order" regret bounds for online linear optimization.
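Exponential weights (Hedge) is the kind of online linear optimization primitive the abstract refers to. A minimal illustrative sketch (not the paper's algorithm) with the standard zeroth-order regret guarantee for losses in $[0,1]$; first-order bounds replace the $\sqrt{T}$ term with one that scales with the best expert's cumulative loss:

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 10, 5000
eta = np.sqrt(8 * np.log(K) / T)       # standard Hedge step size
weights = np.ones(K)
alg_loss, losses = 0.0, np.zeros(K)

for t in range(T):
    p = weights / weights.sum()        # play a distribution over K experts
    loss = rng.uniform(0.0, 1.0, K)    # adversary reveals per-expert losses in [0, 1]
    alg_loss += p @ loss               # expected loss of the algorithm
    losses += loss
    weights *= np.exp(-eta * loss)     # exponential-weights update

regret = alg_loss - losses.min()       # regret vs. the best fixed expert in hindsight
bound = np.sqrt(T * np.log(K) / 2)     # zeroth-order guarantee for this step size
print(regret, bound)
```

The guarantee `regret <= bound` holds deterministically for this step size, regardless of the loss sequence.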



Review for NeurIPS paper: A Single Recipe for Online Submodular Maximization with Adversarial or Stochastic Constraints

Neural Information Processing Systems

Summary and Contributions: The paper considers the problem of maximizing a general monotone DR-submodular function subject to a general convex constraint (general up to some natural assumptions) in the online regret-minimization setting. The paper presents two algorithms for this problem and proves bounds on their regret (with respect to the $(1-1/e)$-approximate offline benchmark), as well as on the extent to which they violate the constraint on average. In that respect, the paper considers three kinds of regret:
- The traditional adversarial static regret, in which the input is selected by an adversary and the algorithm competes with the best single solution in hindsight.
For some of these benchmarks there are previous results for the special case in which the constraints are linear. The current paper improves over them both in terms of the generality of the constraint and in terms of the quality of the guarantees.
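For reference, the adversarial static regret against the $(1-1/e)$ offline benchmark mentioned here is typically defined as follows (notation assumed from the abstract: actions $x_t$ in a convex feasible set $\mathcal{K}$, utilities $f_t$):

```latex
\mathcal{R}_{1-1/e}(T)
  \;=\;
  \Bigl(1 - \tfrac{1}{e}\Bigr)\,
  \max_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x)
  \;-\;
  \sum_{t=1}^{T} f_t(x_t)
```

The $(1-1/e)$ factor reflects the best polynomial-time offline approximation ratio for monotone DR-submodular maximization, so sub-linear $\mathcal{R}_{1-1/e}(T)$ is the strongest guarantee one can reasonably ask for.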


Review for NeurIPS paper: Improved Algorithms for Online Submodular Maximization via First-order Regret Bounds

Neural Information Processing Systems

Weaknesses: This paper has a few weaknesses, which are as follows:
- Although the authors have cited [9] and [10] as previous literature on online submodular maximization, they have not compared their obtained regret bounds with the bounds of [9] and [10]. These two papers consider the online continuous submodular maximization problem (as opposed to the submodular set function maximization in this work) and may seem different at first glance; however, since the multilinear extension of a submodular set function is continuous submodular, a combination of the continuous algorithms in [9] and [10] with a lossless rounding technique such as pipage rounding could be used for discrete problems. In particular, the algorithms of [9] and [10] are almost identical to Algorithm 1 of this paper, and the only novelty of Algorithm 1 is the use of g f-l (as opposed to f) as the utility function, which leads to \alpha-regret bounds with a curvature-dependent approximation ratio \alpha. I noticed that this issue has been addressed in more detail in the appendix; however, I think it is important to discuss the computational complexity of the discretized version of Algorithm 1 in the paper as well. In particular, implementing this algorithm on a real-world problem and comparing the performance for different discretization levels and various numbers of samples could have made the paper even better.
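Regarding the sampling cost raised above: the multilinear extension is typically evaluated by Monte Carlo sampling, which is exactly where the "number of samples" trade-off enters. A hedged sketch on a toy coverage function (the function, sample count, and helper name are illustrative, not from either paper):

```python
import numpy as np

def multilinear_extension(f, x, n_samples, rng):
    """Monte Carlo estimate of F(x) = E[f(S)], where S contains each
    element i independently with probability x[i]."""
    n = len(x)
    est = 0.0
    for _ in range(n_samples):
        S = np.flatnonzero(rng.random(n) < x)  # sample a random set ~ x
        est += f(S)
    return est / n_samples

# toy coverage function: f(S) = size of the union of covered items
universe = [{0, 1}, {1, 2}, {2, 3}, {3, 4}]
f = lambda S: len(set().union(*[universe[i] for i in S])) if len(S) else 0

rng = np.random.default_rng(0)
x = np.array([0.5, 0.5, 0.5, 0.5])
est = multilinear_extension(f, x, 2000, rng)
print(est)  # exact value here is 3.25; the estimate concentrates around it
```

The estimation error decays as $O(1/\sqrt{\text{n\_samples}})$, so each gradient or function evaluation in a discretized continuous algorithm pays this sampling cost, which is the computational complexity the review asks to see discussed.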


